Intent classification and slot filling are two core tasks in natural language understanding (NLU). Because the two tasks interact closely, joint models often outperform single-task designs. One promising backbone, BERT (Bidirectional Encoder Representations from Transformers), enables joint optimization of the two tasks. BERT adopts WordPiece tokenization, which splits each input token into multiple sub-tokens and causes a mismatch between the lengths of the token sequence and the label sequence. Previous methods use only the hidden state of the first sub-token as input to the classifier, which limits performance because some hidden semantic information is discarded during fine-tuning. To address this issue, we propose a novel joint model based on BERT that explicitly models the features of the multiple sub-tokens produced by WordPiece tokenization, thereby generating context features that contribute to slot filling. Specifically, we encode the hidden states of the multiple sub-tokens into a context vector via an attention mechanism. We then feed each context vector into the slot filling encoder, which preserves the integrity of the sentence. Experimental results demonstrate that the proposed model achieves significant improvements in intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on two public benchmark datasets. In particular, slot filling F1 improves from 96.1 to 98.2 (2.1% absolute) on the ATIS dataset.
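Below is a minimal sketch, not the authors' code, of the sub-token attention pooling the abstract describes: the hidden states of all WordPiece sub-tokens of one word are pooled into a single context vector, so no sub-token is discarded. The additive-attention form and the toy dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SubTokenAttentionPool(nn.Module):
    """Pools the BERT hidden states of a word's sub-tokens into one vector."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)  # assumed additive attention scorer

    def forward(self, sub_hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # sub_hidden: (words, max_subtokens, hidden); mask: (words, max_subtokens)
        scores = self.score(sub_hidden).squeeze(-1)
        scores = scores.masked_fill(mask == 0, float("-inf"))  # ignore padding
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return (weights * sub_hidden).sum(dim=1)  # (words, hidden) context vectors

# Toy usage: pool the three sub-tokens of one tokenized word.
pool = SubTokenAttentionPool(hidden_size=768)
h = torch.randn(1, 3, 768)            # e.g. hidden states of "play ##er ##s"
context = pool(h, torch.ones(1, 3))   # (1, 768), fed to the slot filling encoder
```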
Zero-shot relation triplet extraction (ZeroRTE) aims to extract relation triplets from unstructured text under the zero-shot setting, where the relation sets at the training and testing stages are disjoint. The previous state-of-the-art method handles this challenging task by leveraging pretrained language models to generate additional training samples, which increases the training cost and severely constrains model performance. To address these issues, we propose a novel method named PCRED for ZeroRTE with Potential Candidate Relation selection and Entity boundary Detection. A remarkable characteristic of PCRED is that it does not rely on additional data yet still achieves promising performance. The model adopts a relation-first paradigm, recognizing unseen relations through candidate relation selection; with this approach, the semantics of the relations are naturally infused into the context. Entities are subsequently extracted based on the context and the semantics of the relations. We evaluate our model on two ZeroRTE datasets. The experimental results show that our method consistently outperforms previous work. Our code will be available at https://anonymous.4open.science/r/PCRED.
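As a rough illustration of the relation-first paradigm, the toy scorer below prepends each candidate relation's name to the sentence and scores the pair, so the relation semantics condition the context. The bag-of-tokens encoder and the thresholding step are assumptions, not PCRED's actual architecture.

```python
import torch
import torch.nn as nn

class RelationFirstScorer(nn.Module):
    """Scores how well a candidate relation fits a sentence (toy encoder)."""
    def __init__(self, vocab_size: int = 30522, dim: int = 128):
        super().__init__()
        self.encode = nn.EmbeddingBag(vocab_size, dim)  # stand-in for a real encoder
        self.head = nn.Linear(dim, 1)

    def forward(self, pair_ids: torch.Tensor) -> torch.Tensor:
        # pair_ids: (num_candidates, seq_len); each row = relation-name tokens
        # concatenated with the sentence tokens, infusing relation semantics.
        return self.head(self.encode(pair_ids)).squeeze(-1)

scorer = RelationFirstScorer()
pairs = torch.randint(0, 30522, (4, 20))  # 4 candidate relations for one sentence
scores = scorer(pairs)                    # keep relations scoring above a threshold,
                                          # then detect entity boundaries per relation
```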
Partial label learning (PLL) is a peculiar weakly supervised learning task in which each training sample is typically associated with a set of candidate labels rather than a single ground truth. Although various label disambiguation methods have been proposed in this domain, they generally assume a class-balanced scenario that may not hold in many real-world applications. Empirically, we observe degraded performance of prior methods when they face the combined challenge of long-tailed distribution and partial labels. In this work, we first identify the major reasons why previous approaches fail. We then propose SoLar, a novel optimal-transport-based framework that allows the disambiguated labels to be refined to match the marginal class prior distribution. SoLar also incorporates a new systematic mechanism for estimating the long-tailed class prior distribution under the PLL setting. Through extensive experiments, SoLar exhibits substantial advantages over previous state-of-the-art PLL methods on standardized benchmarks. Code and data are available at: https://github.com/hbzju/solar.
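The optimal transport idea can be illustrated with standard Sinkhorn iterations (an assumption about the exact solver): a batch of candidate-masked label distributions is rescaled until its column marginal matches the estimated class prior, yielding refined labels for the long-tailed setting.

```python
import numpy as np

def sinkhorn_refine(probs, class_prior, n_iters=50, eps=1e-12):
    """probs: (batch, classes), zeroed outside each sample's candidate set;
    class_prior: (classes,) estimated long-tailed prior, summing to 1."""
    q = np.asarray(probs, dtype=np.float64).copy()
    n = q.shape[0]
    prior = np.asarray(class_prior, dtype=np.float64)
    for _ in range(n_iters):
        q *= (1.0 / n) / np.maximum(q.sum(axis=1, keepdims=True), eps)  # rows -> 1/n
        q *= prior / np.maximum(q.sum(axis=0, keepdims=True), eps)      # cols -> prior
    return q / np.maximum(q.sum(axis=1, keepdims=True), eps)  # refined per-sample labels

# Zeros from the candidate mask are preserved by the multiplicative updates,
# so refined labels never leave a sample's candidate set.
```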
With the widespread use of powerful image editing tools, image tampering has become easy and realistic. Existing image forensics methods still face challenges of low accuracy and robustness. Noting that tampered regions are usually semantic objects, in this letter we propose an effective image tampering localization scheme based on a deep semantic segmentation network. A ConvNeXt network serves as the encoder to learn better feature representations, and a UPerNet decoder fuses multi-scale features to achieve better localization capability. A combined loss and effective data augmentation are adopted to ensure effective model training. Extensive experimental results confirm that the localization performance of the proposed scheme outperforms other state-of-the-art schemes.
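The abstract does not spell out the combined loss; a common choice for binary tampering masks, sketched below purely as an assumption, is binary cross-entropy plus Dice loss.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits: torch.Tensor, target: torch.Tensor, smooth: float = 1.0):
    """logits, target: (B, 1, H, W); target is the binary tampered-region mask."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + smooth) / (union + smooth)  # per-image Dice loss
    return bce + dice.mean()
```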
We propose an accurate and efficient normal estimation method that can handle the noise and nonuniform density of unstructured 3D point clouds. Unlike existing methods that directly take patches and ignore local neighborhood relationships, which makes them susceptible to challenging regions such as sharp edges, we propose to learn graph convolutional feature representations for normal estimation, which emphasize local neighborhood geometry and effectively encode intrinsic relationships. In addition, we design a novel adaptive module based on the attention mechanism to integrate point features with their neighboring features, further enhancing the robustness of the proposed normal estimator against point density variations. To make the features more discriminative, we introduce a multi-scale architecture in the graph block to learn richer geometric features. Our method outperforms competitors with state-of-the-art accuracy on various benchmark datasets and is quite robust against noise, outliers, and density variations.
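A hedged sketch of the attention-based adaptive module described above: each point's feature is fused with its k-nearest-neighbor features through learned attention weights, which damps the influence of uneven point density. The pairwise scoring form is an assumption.

```python
import torch
import torch.nn as nn

class AdaptiveNeighborFusion(nn.Module):
    """Attention-weighted fusion of a point feature with its neighbors' features."""
    def __init__(self, dim: int):
        super().__init__()
        self.att = nn.Linear(2 * dim, 1)  # scores each (center, neighbor) pair

    def forward(self, center: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # center: (N, dim); neighbors: (N, k, dim), gathered by kNN in xyz space
        c = center.unsqueeze(1).expand_as(neighbors)
        w = torch.softmax(self.att(torch.cat([c, neighbors], dim=-1)).squeeze(-1), dim=-1)
        return center + (w.unsqueeze(-1) * neighbors).sum(dim=1)  # fused feature
```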
A rationale is defined as a subset of input features that best explain or support a machine learning model's prediction. Rationale identification improves the generalizability and interpretability of neural networks on vision and language data. In graph applications such as molecule and polymer property prediction, identifying representative subgraph structures, called graph rationales, plays an essential role in the performance of graph neural networks. Existing graph pooling and/or distribution intervention methods lack examples from which to learn to identify optimal graph rationales. In this work, we introduce a new augmentation operation called environment replacement, which automatically creates virtual data examples to improve rationale identification. We propose an efficient framework that performs rationale-environment separation and representation learning on the real and augmented examples in latent space, avoiding the high complexity of explicit graph decoding and encoding. Experiments on seven molecular and four polymer real-world datasets demonstrate the effectiveness and efficiency of the proposed augmentation-based graph rationalization framework compared with recent techniques.
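In latent space the augmentation reduces to a simple swap, sketched below; the split-in-half rationale/environment separation is an illustrative assumption.

```python
import torch

def environment_replacement(latent: torch.Tensor) -> torch.Tensor:
    """latent: (batch, 2*d), assuming [:d] = rationale part, [d:] = environment part."""
    d = latent.shape[1] // 2
    rationale, environment = latent[:, :d], latent[:, d:]
    perm = torch.randperm(latent.shape[0])  # borrow environments across the batch
    return torch.cat([rationale, environment[perm]], dim=1)  # virtual examples

# Each virtual example keeps the label of its rationale, so a predictor trained on
# real plus virtual examples is pushed to rely on the rationale part alone.
```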
Recently, leveraging BERT pre-training to improve the phoneme encoder in text-to-speech (TTS) has drawn increasing attention. However, these works apply pre-training with character-based units to enhance the TTS phoneme encoder, which is inconsistent with TTS fine-tuning that takes phonemes as input. Pre-training with only phonemes as input can alleviate the input mismatch, but it lacks the ability to model rich representations and semantic information due to the limited phoneme vocabulary. In this paper, we propose Mixed-Phoneme BERT, a novel variant of the BERT model that uses mixed phoneme and sup-phoneme representations to enhance learning capability. Specifically, we merge adjacent phonemes into sup-phonemes and combine the phoneme sequence and the merged sup-phoneme sequence as the model input, which can enhance the model's capacity to learn rich contextual representations. Experimental results show that the proposed Mixed-Phoneme BERT significantly improves TTS performance, with a 0.30 CMOS gain over a FastSpeech 2 baseline. Mixed-Phoneme BERT also achieves a 3x inference speedup with voice quality similar to PnG BERT, a previous pre-trained model for TTS.
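A toy illustration of building the mixed input: adjacent phonemes are merged into sup-phonemes via a greedy merge table (how the real merges are learned, e.g. BPE-style, is an assumption here), and both sequences are then fed to the model together.

```python
MERGES = {("HH", "AH"): "HH_AH", ("L", "OW"): "L_OW"}  # illustrative merge table

def to_sup_phonemes(phonemes):
    out, i = [], 0
    while i < len(phonemes):
        pair = tuple(phonemes[i:i + 2])
        if pair in MERGES:
            out.append(MERGES[pair]); i += 2  # merge two phonemes into a sup-phoneme
        else:
            out.append(phonemes[i]); i += 1
    return out

phons = ["HH", "AH", "L", "OW"]       # toy phoneme sequence for "hello"
print(to_sup_phonemes(phons))         # ['HH_AH', 'L_OW']
# The model input combines the phoneme sequence and this sup-phoneme sequence,
# e.g. by summing their aligned embeddings.
```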
Predicting the bankruptcy risk of small and medium-sized enterprises (SMEs) is an important step for financial institutions when making loans. However, existing studies in both the finance and AI research fields tend to consider only intra-enterprise risk or contagion risk, ignoring their interaction and combined effect. This study is the first to consider both types of risk and their joint effect for bankruptcy prediction. Specifically, we first propose an intra-enterprise risk encoder based on statistically significant enterprise risk indicators for intra-risk learning. We then propose an enterprise contagion risk encoder, based on enterprise relation information from an enterprise knowledge graph, for contagion risk embedding. In particular, the contagion risk encoder includes both a newly proposed hypergraph neural network and a heterogeneous graph neural network, which model contagion risk in two different aspects: common risk factors based on hyperedges and directly diffused risk. To evaluate the model, we collect real-world multi-source data on SMEs and build a novel benchmark dataset named SMEsD. We provide open access to the dataset, which is expected to further promote research on financial risk analysis. Experiments on SMEsD against twelve state-of-the-art baselines demonstrate the effectiveness of the proposed model for bankruptcy prediction.
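One hypergraph convolution step, where each hyperedge groups the enterprises sharing a common risk factor, can be sketched as a node-to-hyperedge-to-node mean aggregation; the exact operator in the paper may differ.

```python
import torch
import torch.nn as nn

def hypergraph_conv(x: torch.Tensor, H: torch.Tensor, lin: nn.Linear) -> torch.Tensor:
    """x: (N, d) enterprise features; H: (N, E) incidence matrix of hyperedges."""
    edge_deg = H.sum(dim=0).clamp(min=1)                  # nodes per hyperedge
    node_deg = H.sum(dim=1).clamp(min=1)                  # hyperedges per node
    edge_feat = (H.t() @ x) / edge_deg.unsqueeze(1)       # node -> hyperedge mean
    node_feat = (H @ edge_feat) / node_deg.unsqueeze(1)   # hyperedge -> node mean
    return torch.relu(lin(node_feat))

x = torch.randn(5, 16)  # 5 enterprises
H = torch.tensor([[1., 0.], [1., 1.], [0., 1.], [1., 0.], [0., 1.]])  # 2 risk factors
out = hypergraph_conv(x, H, nn.Linear(16, 16))  # risk-factor-aware embeddings
```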
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which suits many real-world data annotation scenarios with label ambiguity. Despite this promise, the performance of PLL often lags behind its supervised counterpart. In this work, we bridge the gap by addressing two key research challenges in PLL -- representation learning and label disambiguation -- in one coherent framework. Specifically, our proposed framework PiCO consists of a contrastive learning module along with a novel class prototype-based label disambiguation algorithm. PiCO produces closely aligned representations for examples from the same classes and facilitates label disambiguation. Theoretically, we show that these two components are mutually beneficial and can be rigorously justified from an expectation-maximization (EM) algorithm perspective. Moreover, we study a challenging yet practical noisy partial label learning setup, where the ground-truth label may not be included in the candidate set. To remedy this problem, we present an extension, PiCO+, that performs distance-based clean sample selection and learns robust classifiers via a semi-supervised contrastive learning algorithm. Extensive experiments demonstrate that our proposed methods significantly outperform the current state-of-the-art approaches on standard and noisy PLL tasks and even achieve results comparable to fully supervised learning.
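A minimal sketch in the spirit of PiCO's prototype-based disambiguation (the momentum value and exact update schedule are assumptions): each example's pseudo-target is the candidate class whose prototype is closest in embedding space, and prototypes are updated as moving averages.

```python
import torch
import torch.nn.functional as F

def disambiguate(emb, candidate_mask, prototypes):
    """emb: (B, d) L2-normalized embeddings; candidate_mask: (B, C) in {0, 1};
    prototypes: (C, d) L2-normalized class prototypes."""
    sim = emb @ prototypes.t()                              # cosine similarities
    sim = sim.masked_fill(candidate_mask == 0, float("-inf"))
    return sim.argmax(dim=1)                                # pseudo-label per example

def update_prototypes(prototypes, emb, pseudo, momentum=0.99):
    for c in pseudo.unique():
        batch_mean = emb[pseudo == c].mean(dim=0)
        prototypes[c] = F.normalize(
            momentum * prototypes[c] + (1 - momentum) * batch_mean, dim=0)
    return prototypes
```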
Stock movement prediction (SMP) aims to predict the price movements of listed companies' shares, a challenging task owing to the volatile nature of financial markets. Recent financial studies show that the momentum spillover effect plays an important role in stock fluctuations. However, previous studies typically learn only simple connection information between related companies, which inevitably fails to model the complex relationships among listed companies in real financial markets. To address this issue, we first construct a more comprehensive market knowledge graph (MKG) that contains two types of entities, namely listed companies and their associated executives, and hybrid relations, including both explicit and implicit relations. We then propose a novel dual attention network, DanSmp, to learn momentum spillover signals based on the constructed MKG for stock prediction. Empirical experiments on our constructed dataset against nine SOTA baselines demonstrate that the proposed DanSmp can improve stock prediction with the constructed MKG.
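As a simplified, single-level stand-in for the paper's dual attention (an assumption for illustration only), the module below lets a company's state attend over its MKG neighbors so momentum from strongly related entities weighs more.

```python
import torch
import torch.nn as nn

class SpilloverAttention(nn.Module):
    """Aggregates neighbor states into a spillover-augmented company state."""
    def __init__(self, dim: int):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))

    def forward(self, company: torch.Tensor, neighbors: torch.Tensor) -> torch.Tensor:
        # company: (dim,); neighbors: (n, dim) related companies and executives
        scores = self.k(neighbors) @ self.q(company) / company.shape[0] ** 0.5
        w = torch.softmax(scores, dim=0)                 # attention over neighbors
        return company + w @ self.v(neighbors)           # spillover-augmented state
```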